Results 1 - 7 of 7
1.
Medicine (Baltimore) ; 101(29): e29587, 2022 Jul 22.
Article in English | MEDLINE | ID: covidwho-1961224

ABSTRACT

This study aimed to tune and test the generalizability of a deep learning-based model for assessment of COVID-19 lung disease severity on chest radiographs (CXRs) from different patient populations. A published convolutional Siamese neural network-based model previously trained on hospitalized patients with COVID-19 was tuned using 250 outpatient CXRs. This model produces a quantitative measure of COVID-19 lung disease severity (pulmonary x-ray severity (PXS) score). The model was evaluated on CXRs from 4 test sets, including 3 from the United States (patients hospitalized at an academic medical center (N = 154), patients hospitalized at a community hospital (N = 113), and outpatients (N = 108)) and 1 from Brazil (patients at an academic medical center emergency department (N = 303)). Radiologists from both countries independently assigned reference standard CXR severity scores, which were correlated with the PXS scores as a measure of model performance (Pearson R). The Uniform Manifold Approximation and Projection (UMAP) technique was used to visualize the neural network results. Tuning the deep learning model with outpatient data yielded high model performance in 2 United States hospitalized patient datasets (R = 0.88 and R = 0.90, compared to baseline R = 0.86). Model performance was similar, though slightly lower, when tested on the United States outpatient and Brazil emergency department datasets (R = 0.86 and R = 0.85, respectively). UMAP showed that the model learned disease severity information that generalized across test sets. A deep learning model that extracts a COVID-19 severity score from CXRs showed generalizable performance across multiple populations from 2 continents, including outpatients and hospitalized patients.
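The record describes the Siamese model only at a high level. As a purely illustrative sketch of the general idea (not the authors' released code), a shared CNN backbone could embed each CXR and severity could be scored as the distance of that embedding to a pool of reference radiographs; the ResNet-50 backbone, embedding size, reference pool, and median-distance scoring below are all assumptions.

```python
# Hypothetical Siamese-style severity scorer; assumes recent torchvision and
# 3-channel inputs (grayscale CXRs replicated across channels).
import torch
import torch.nn as nn
import torchvision.models as models

class SiameseEncoder(nn.Module):
    """Shared CNN backbone mapping a CXR to an embedding vector."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        backbone = models.resnet50(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, embed_dim)
        self.backbone = backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(x)

def severity_score(encoder: SiameseEncoder, cxr: torch.Tensor,
                   reference_batch: torch.Tensor) -> float:
    """One plausible way to turn a Siamese embedding into a scalar severity score:
    the median Euclidean distance from the query CXR to a pool of reference CXRs."""
    encoder.eval()
    with torch.no_grad():
        z = encoder(cxr.unsqueeze(0))            # (1, D)
        z_ref = encoder(reference_batch)         # (N, D)
        dists = torch.norm(z_ref - z, dim=1)     # Euclidean distances
    return dists.median().item()
```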


Subject(s)
COVID-19 , Deep Learning , COVID-19/diagnostic imaging , Humans , Lung , Radiography, Thoracic/methods , Radiologists
2.
Nat Med ; 27(10): 1735-1743, 2021 10.
Article in English | MEDLINE | ID: covidwho-1412139

ABSTRACT

Federated learning (FL) is a method used for training artificial intelligence models with data from multiple sources while maintaining data anonymity, thus removing many barriers to data sharing. Here we used data from 20 institutes across the globe to train an FL model, called EXAM (electronic medical record (EMR) chest X-ray AI model), that predicts the future oxygen requirements of symptomatic patients with COVID-19 using inputs of vital signs, laboratory data, and chest X-rays. EXAM achieved an average area under the curve (AUC) >0.92 for predicting outcomes at 24 and 72 h from the time of initial presentation to the emergency room, and it provided a 16% improvement in average AUC measured across all participating sites and an average increase in generalizability of 38% when compared with models trained at a single site using that site's data. For prediction of mechanical ventilation treatment or death at 24 h at the largest independent test site, EXAM achieved a sensitivity of 0.950 and a specificity of 0.882. In this study, FL facilitated rapid data science collaboration without data exchange and generated a model that generalized across heterogeneous, unharmonized datasets for prediction of clinical outcomes in patients with COVID-19, setting the stage for the broader use of FL in healthcare.
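The abstract does not state which aggregation scheme EXAM used, so the sketch below shows a generic federated-averaging round: each site trains a copy of the global model on its own data, and a central server averages the weights in proportion to site sample counts, so raw records never leave a site. The model, data loaders, and hyperparameters are placeholders.

```python
# Generic FedAvg round (illustrative; not the EXAM implementation).
import copy
import torch

def local_update(global_model, loader, epochs=1, lr=1e-4):
    """Train a copy of the global model on one site's private data."""
    local = copy.deepcopy(global_model)
    opt = torch.optim.Adam(local.parameters(), lr=lr)
    loss_fn = torch.nn.BCEWithLogitsLoss()
    local.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(local(x), y).backward()
            opt.step()
    return local.state_dict(), len(loader.dataset)

def federated_average(global_model, site_loaders):
    """One communication round: weight-average the site updates by sample count."""
    updates = [local_update(global_model, dl) for dl in site_loaders]
    total = sum(n for _, n in updates)
    averaged = {key: sum(sd[key].float() * (n / total) for sd, n in updates)
                for key in updates[0][0]}
    global_model.load_state_dict(averaged)
    return global_model
```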


Subject(s)
COVID-19/physiopathology , Machine Learning , Outcome Assessment, Health Care , COVID-19/therapy , COVID-19/virology , Electronic Health Records , Humans , Prognosis , SARS-CoV-2/isolation & purification
3.
Eur J Radiol ; 139: 109583, 2021 Jun.
Article in English | MEDLINE | ID: covidwho-1074725

ABSTRACT

PURPOSE: As of August 30, 2020, there were 25.1 million confirmed cases and 845 thousand deaths caused by coronavirus disease 2019 (COVID-19) worldwide. With overwhelming demands on medical resources, stratifying patients by risk is essential. In this multi-center study, we built prognosis models to predict severity outcomes, combining patients' electronic health records (EHR), which included vital signs and laboratory data, with deep learning- and CT-based severity prediction. METHOD: We first developed a CT segmentation network using datasets from multiple institutions worldwide. Two biomarkers were extracted from the CT images: total opacity ratio (TOR) and consolidation ratio (CR). After obtaining TOR and CR, further prognosis analysis was conducted on datasets from INSTITUTE-1, INSTITUTE-2 and INSTITUTE-3. For each cohort, a generalized linear model (GLM) was applied for prognosis prediction. RESULTS: For the deep learning model, the correlation coefficient between the network prediction and manual segmentation was 0.755, 0.919, and 0.824 for the three cohorts, respectively. The AUC (95% CI) of the final prognosis models was 0.85 (0.77-0.92), 0.93 (0.87-0.98), and 0.86 (0.75-0.94) for the INSTITUTE-1, INSTITUTE-2, and INSTITUTE-3 cohorts, respectively. Either TOR or CR was retained in all three final prognosis models. Age, white blood cell (WBC) count, and platelet (PLT) count were selected as predictors in two cohorts. Oxygen saturation (SpO2) was selected as a predictor in one cohort. CONCLUSION: The developed deep learning method can segment lung infection regions. Prognosis results indicated that age, SpO2, CT biomarkers, PLT, and WBC were the most important prognostic predictors of COVID-19 in our prognosis model.
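As a hedged illustration of the prognosis step (not the authors' code), the two CT biomarkers can be computed as volume fractions from binary segmentation masks and then combined with EHR variables in a binomial GLM; the mask and column names and the statsmodels usage below are assumptions.

```python
# Sketch: CT biomarkers from segmentation masks, then a binomial GLM.
import statsmodels.api as sm

def opacity_ratios(lung_mask, opacity_mask, consolidation_mask):
    """TOR = opacity volume / lung volume; CR = consolidation volume / lung volume.
    Masks are assumed to be binary numpy arrays on the same voxel grid."""
    lung_vox = lung_mask.sum()
    return opacity_mask.sum() / lung_vox, consolidation_mask.sum() / lung_vox

def fit_prognosis_glm(df, predictors, outcome="severe"):
    """Binomial GLM of a binary severity outcome on CT biomarkers and EHR variables,
    e.g., predictors = ["TOR", "CR", "age", "WBC", "PLT", "SpO2"]."""
    X = sm.add_constant(df[predictors])
    return sm.GLM(df[outcome], X, family=sm.families.Binomial()).fit()
```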


Subject(s)
COVID-19 , Deep Learning , Electronic Health Records , Humans , Lung , Prognosis , SARS-CoV-2 , Tomography, X-Ray Computed
4.
Sci Rep ; 11(1): 858, 2021 01 13.
Article in English | MEDLINE | ID: covidwho-1065926

ABSTRACT

This study compared the performance of artificial intelligence (AI) and Radiographic Assessment of Lung Edema (RALE) scores from frontal chest radiographs (CXRs) for predicting patient outcomes and the need for mechanical ventilation in COVID-19 pneumonia. Our IRB-approved study included 1367 serial CXRs from 405 adult patients (mean age 65 ± 16 years) from two sites, one in the US (Site A) and one in South Korea (Site B). We recorded information pertaining to patient demographics (age, gender), smoking history, comorbid conditions (such as cancer, cardiovascular and other diseases), vital signs (temperature, oxygen saturation), and available laboratory data (such as WBC count and CRP). Two thoracic radiologists performed a qualitative assessment of all CXRs based on the RALE score to rate the severity of lung involvement. All CXRs were processed with a commercial AI algorithm to obtain the percentage of the lung affected by findings related to COVID-19 (AI score). Independent t- and chi-square tests were used, along with multiple logistic regression, with the area under the curve (AUC) as the output metric for predicting disease outcome and the need for mechanical ventilation. The RALE and AI scores had a strong positive correlation in CXRs from each site (r2 = 0.79-0.86; p < 0.0001). Patients who died or received mechanical ventilation had significantly higher RALE and AI scores than those who recovered or did not require mechanical ventilation (p < 0.001). Patients with a larger difference between baseline and maximum RALE and AI scores had a higher prevalence of death and mechanical ventilation (p < 0.001). The addition of patients' age, gender, WBC count, and peripheral oxygen saturation increased the AUC for outcome prediction from 0.87 to 0.94 (95% CI 0.90-0.97) for the RALE scores and from 0.82 to 0.91 (95% CI 0.87-0.95) for the AI scores. The AI algorithm is as robust a predictor of adverse patient outcomes (death or need for mechanical ventilation) as subjective RALE scores in patients with COVID-19 pneumonia.
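A minimal sketch of the modelling step described, assuming a tabular dataset with an imaging-score column and the clinical covariates named above (all column names are hypothetical), would fit a logistic regression on a chosen predictor set and report the held-out AUC:

```python
# Illustrative only: imaging score +/- clinical covariates -> logistic regression AUC.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def auc_for_predictors(df, predictors, outcome="death_or_ventilation"):
    """Fit a logistic model on the chosen predictors (e.g., ["AI_score"] alone versus
    ["AI_score", "age", "gender", "WBC", "SpO2"]) and return the test-set AUC."""
    X, y = df[predictors].to_numpy(), df[outcome].to_numpy()
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```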


Subject(s)
Artificial Intelligence , COVID-19/diagnosis , COVID-19/therapy , Respiration, Artificial , Adult , Aged , Aged, 80 and over , COVID-19/diagnostic imaging , Cohort Studies , Female , Humans , Image Processing, Computer-Assisted , Lung/diagnostic imaging , Lung/pathology , Male , Middle Aged , Organ Size , Prognosis , Tomography, X-Ray Computed , Young Adult
5.
Med Image Anal ; 70: 101993, 2021 05.
Article in English | MEDLINE | ID: covidwho-1065467

ABSTRACT

In recent years, deep learning-based image analysis methods have been widely applied in computer-aided detection, diagnosis, and prognosis, and have shown their value during the public health crisis of the novel coronavirus disease 2019 (COVID-19) pandemic. Chest radiography (CXR) has played a crucial role in triaging, diagnosing, and monitoring COVID-19 patients, particularly in the United States. Given the mixed and nonspecific signals in CXR, an image retrieval model that provides both similar images and associated clinical information can be more clinically meaningful than a direct image diagnostic model. In this work we develop a novel CXR image retrieval model based on deep metric learning. Unlike traditional diagnostic models, which learn a direct mapping from images to labels, the proposed model learns an optimized embedding space in which images with the same labels and similar contents are pulled together. The model uses a multi-similarity loss with a hard-mining sampling strategy and an attention mechanism to learn this embedding space, and provides similar images, visualizations of disease-related attention maps, and useful clinical information to assist clinical decisions. The model is trained and validated on an international multi-site COVID-19 dataset collected from 3 different sources. Experimental results on COVID-19 image retrieval and diagnosis tasks show that the proposed model can serve as a robust solution for CXR analysis and patient management in COVID-19. The model's transferability is also tested on a different clinical decision support task for COVID-19, in which the pre-trained model is applied to extract image features from a new dataset without any further training. The extracted features are then combined with COVID-19 patients' vital signs, laboratory tests, and medical histories to predict the likelihood of airway intubation within 72 hours, which is strongly associated with patient prognosis and is crucial for patient care and hospital resource planning. These results demonstrate that our deep metric learning-based image retrieval model is highly effective for CXR retrieval, diagnosis, and prognosis, and thus has great clinical value for the treatment and management of COVID-19 patients.
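To make the training objective concrete, here is a hedged sketch of deep metric learning with a multi-similarity loss, hard-pair mining, and nearest-neighbour retrieval in the learned embedding space. It assumes the third-party pytorch-metric-learning package and a placeholder encoder, and illustrates the general technique rather than the authors' implementation.

```python
# Metric-learning training step and embedding-space retrieval (illustrative).
import torch
import torch.nn.functional as F
from pytorch_metric_learning import losses, miners

loss_func = losses.MultiSimilarityLoss(alpha=2, beta=50, base=0.5)
miner = miners.MultiSimilarityMiner(epsilon=0.1)

def training_step(encoder, images, labels, optimizer):
    """Pull same-label CXRs together in the embedding space and push others apart."""
    embeddings = encoder(images)
    hard_pairs = miner(embeddings, labels)          # mine informative pairs
    loss = loss_func(embeddings, labels, hard_pairs)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def retrieve(encoder, query_image, gallery_embeddings, k=5):
    """Return the indices of the k most similar gallery CXRs; their stored clinical
    records would be displayed alongside the retrieved images."""
    with torch.no_grad():
        q = F.normalize(encoder(query_image.unsqueeze(0)), dim=1)
        g = F.normalize(gallery_embeddings, dim=1)
        sims = (g @ q.T).squeeze(1)                 # cosine similarity
    return sims.topk(k).indices
```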


Subject(s)
COVID-19/diagnostic imaging , Deep Learning , Image Interpretation, Computer-Assisted , Tomography, X-Ray Computed , Algorithms , Female , Humans , Male , Middle Aged , Pandemics
6.
IEEE J Biomed Health Inform ; 24(12): 3529-3538, 2020 12.
Article in English | MEDLINE | ID: covidwho-970028

ABSTRACT

Early and accurate diagnosis of coronavirus disease (COVID-19) is essential for patient isolation and contact tracing so that the spread of infection can be limited. Computed tomography (CT) can provide important information in COVID-19, especially for patients with moderate to severe disease and those with worsening cardiopulmonary status. As an automated tool, deep learning methods can be used to perform semantic segmentation of affected lung regions, which is important for establishing disease severity and predicting prognosis. Both the extent and the type of pulmonary opacities help assess disease severity. However, manual pixel-level multi-class labelling is time-consuming, subjective, and non-quantitative. In this article, we propose a hybrid weak label-based deep learning method that utilizes both the manually annotated pulmonary opacities from COVID-19 pneumonia and the patient-level disease-type information available from clinical reports. A UNet was first trained with semantic labels to segment the total infected region. It was used to initialize another UNet, which was trained to segment the consolidations using patient-level information and the Expectation-Maximization (EM) algorithm. To demonstrate the performance of the proposed method, multi-institutional CT datasets from Iran, Italy, South Korea, and the United States were utilized. Results show that the proposed method can predict the infected regions as well as the consolidation regions with good correlation to human annotation.
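The EM-based weak-label step is the least standard part of the pipeline; the sketch below shows one plausible way such a loop could be organized, alternating between generating pseudo consolidation masks that respect the infection mask and the patient-level label (E-step) and refitting the consolidation UNet on those masks (M-step). Everything beyond what the abstract states is an assumption.

```python
# Hypothetical EM-style weak-label loop for the consolidation UNet.
import torch
import torch.nn.functional as F

def em_weak_label_training(consolidation_unet, infection_unet, scans,
                           patient_has_consolidation, optimizer, n_iterations=10):
    """scans: list of CT tensors; patient_has_consolidation: per-patient booleans
    taken from clinical reports."""
    for _ in range(n_iterations):
        # E-step: pseudo-labels from current predictions, restricted to the infected
        # region and zeroed when the report says the patient has no consolidation.
        pseudo_masks = []
        with torch.no_grad():
            for ct, has_cons in zip(scans, patient_has_consolidation):
                infected = (torch.sigmoid(infection_unet(ct)) > 0.5).float()
                pred = torch.sigmoid(consolidation_unet(ct)) * infected
                pseudo_masks.append((pred > 0.5).float() if has_cons
                                    else torch.zeros_like(pred))
        # M-step: refit the consolidation network against the pseudo-labels.
        for ct, mask in zip(scans, pseudo_masks):
            optimizer.zero_grad()
            loss = F.binary_cross_entropy_with_logits(consolidation_unet(ct), mask)
            loss.backward()
            optimizer.step()
```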


Subject(s)
COVID-19/diagnostic imaging , Deep Learning , Tomography, X-Ray Computed/methods , Algorithms , COVID-19/virology , Female , Humans , Male , Retrospective Studies , SARS-CoV-2/isolation & purification , Severity of Illness Index
7.
medRxiv ; 2020 Sep 18.
Article in English | MEDLINE | ID: covidwho-808139

ABSTRACT

PURPOSE: To improve and test the generalizability of a deep learning-based model for assessment of COVID-19 lung disease severity on chest radiographs (CXRs) from different patient populations. MATERIALS AND METHODS: A published convolutional Siamese neural network-based model previously trained on hospitalized patients with COVID-19 was tuned using 250 outpatient CXRs. This model produces a quantitative measure of COVID-19 lung disease severity (pulmonary x-ray severity (PXS) score). The model was evaluated on CXRs from four test sets, including three from the United States (patients hospitalized at an academic medical center (N=154), patients hospitalized at a community hospital (N=113), and outpatients (N=108)) and one from Brazil (patients at an academic medical center emergency department (N=303)). Radiologists from both countries independently assigned reference standard CXR severity scores, which were correlated with the PXS scores as a measure of model performance (Pearson r). The Uniform Manifold Approximation and Projection (UMAP) technique was used to visualize the neural network results. RESULTS: Tuning the deep learning model with outpatient data improved model performance in two United States hospitalized patient datasets (r=0.88 and r=0.90, compared to baseline r=0.86). Model performance was similar, though slightly lower, when tested on the United States outpatient and Brazil emergency department datasets (r=0.86 and r=0.85, respectively). UMAP showed that the model learned disease severity information that generalized across test sets. CONCLUSIONS: Performance of a deep learning-based model that extracts a COVID-19 severity score from CXRs improved using training data from a different patient cohort (outpatient versus hospitalized) and generalized across multiple populations.
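The evaluation described here, Pearson correlation of PXS scores against radiologist reference scores plus a UMAP projection of the network outputs, could be sketched as follows, assuming the scipy and umap-learn packages and pre-computed arrays of scores and embeddings.

```python
# Sketch of the test-set evaluation: correlation plus a 2-D UMAP projection.
from scipy.stats import pearsonr
import umap

def evaluate_test_set(pxs_scores, radiologist_scores, embeddings):
    """Correlate model and reference severity scores, and project the embeddings to
    2-D so severity structure can be compared visually across test sets."""
    r, p_value = pearsonr(pxs_scores, radiologist_scores)
    projection = umap.UMAP(n_components=2, random_state=0).fit_transform(embeddings)
    return r, p_value, projection
```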
